13 research outputs found

    Hacia la conciencia social del consumo energético en centros de datos

    Full text link
    Given the growing problem of energy consumption in data centers, and the gradual adoption of current best practices to improve energy efficiency, a radical change in how these facilities approach energy is essential in order to keep reducing their environmental impact significantly. In this article we present an initial proposal for the holistic optimization of energy consumption in data centers, validated in a population-wide health monitoring scenario with savings of up to 50% over the state of the art in energy efficiency. We advocate a global awareness of the state and thermal behavior of the data center, using predictive models to anticipate the variables that drive the optimization. Moreover, the energy optimization strategies of future data centers must be social: the different elements (servers, management software, cooling systems) should have some awareness of the state of the other elements of the system and of how the environment can help or harm them, seeking consensus on collaborative strategies to reduce total consumption.
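    The "predictive models" the abstract advocates could take many forms; as a minimal sketch, the snippet below fits a linear autoregressive forecaster to a synthetic inlet-temperature trace with ordinary least squares. The lag order, the trace and the AR form are illustrative assumptions, not the authors' models.

```python
import numpy as np

# Synthetic inlet-temperature trace (deg C), standing in for real telemetry.
rng = np.random.default_rng(1)
temps = 24 + 2 * np.sin(np.linspace(0, 12, 300)) + rng.normal(0, 0.1, 300)

LAGS = 4  # AR order, assumed
X = np.column_stack([temps[i:len(temps) - LAGS + i] for i in range(LAGS)])
y = temps[LAGS:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit AR(4) coefficients

next_temp = temps[-LAGS:] @ coef               # one-step-ahead forecast
print(f"predicted next inlet temperature: {next_temp:.2f} deg C")
```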

    On the leakage-power modeling for optimal server operation

    Get PDF
    Leakage power consumption is a component of the total power consumption in data centers that is not traditionally considered when choosing the set-point temperature of the room. However, the effect of this power component, which increases with temperature, can determine the savings associated with careful management of the cooling system, as well as the reliability of the system. The work presented in this paper identifies the need to address leakage power in order to achieve substantial savings in the energy consumption of servers. In particular, our work shows that, through careful detection and management of two working regions (low and high impact of thermal-dependent leakage), the energy consumption of the data center can be optimized by reducing the cooling budget.
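    To make the two-region idea concrete, here is a minimal sketch assuming the usual exponential dependence of leakage on temperature; the constants and the region threshold are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

P_LEAK_REF = 20.0   # leakage power (W) at the reference temperature (assumed)
T_REF = 45.0        # reference CPU temperature, deg C (assumed)
BETA = 0.04         # exponential temperature coefficient, 1/deg C (assumed)

def leakage_power(t_cpu):
    """Leakage grows roughly exponentially with silicon temperature."""
    return P_LEAK_REF * np.exp(BETA * (t_cpu - T_REF))

def working_region(t_cpu, threshold_w=30.0):
    """Classify operation into the low- or high-impact leakage region."""
    return "high" if leakage_power(t_cpu) > threshold_w else "low"

for t in (40, 55, 70):
    print(f"{t} degC -> {leakage_power(t):5.1f} W leakage ({working_region(t)} region)")
```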

    A novel energy-driven computing paradigm for e-health scenarios

    Get PDF
    A first-rate e-Health system saves lives, provides better patient care, allows complex but useful epidemiologic analyses and saves money. However, there may also be concerns about the costs and complexities associated with e-Health implementation, and about the need to address the energy footprint of the highly demanding computing facilities involved. This paper proposes a novel, evolved computing paradigm that: (i) provides the required computing and sensing resources; (ii) allows population-wide diffusion; (iii) exploits the storage, communication and computing services provided by the Cloud; (iv) tackles energy optimization as a first-class requirement, taking it into account throughout the whole development cycle. The novel computing concept and the multi-layer top-down energy-optimization methodology obtain promising results in a realistic scenario for cardiovascular tracking and analysis, making Home Assisted Living a reality.

    A cyber-physical approach to combined HW-SW monitoring for improving energy efficiency in data centers

    Get PDF
    High-Performance Computing, Cloud computing and next-generation applications such as e-Health or Smart Cities have dramatically increased the computational demand of data centers. The huge energy consumption, increasing levels of CO2 and the economic costs of these facilities represent a challenge for industry and researchers alike. Recent research trends propose the use of holistic optimization techniques to jointly minimize data center computational and cooling costs from a multilevel perspective. This paper presents an analysis of the parameters needed to integrate the data center into a holistic optimization framework, and leverages Cyber-Physical Systems to gather workload, server and environmental data via software techniques and by deploying a non-intrusive Wireless Sensor Network (WSN). This solution tackles data sampling, retrieval and storage from a reconfigurable perspective, reducing the amount of data generated for optimization by 68% without information loss, doubling the lifetime of the WSN nodes and enabling runtime energy minimization techniques in a real scenario.
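    The abstract does not detail the reconfigurable sampling scheme, but a send-on-delta (dead-band) filter is one common way a WSN node can cut transmitted samples without losing information beyond a known tolerance. A minimal sketch, with an assumed threshold:

```python
def filter_samples(samples, delta=0.5):
    """Return the subset of readings a node would actually transmit:
    only values that deviate from the last reported one by more than delta."""
    sent = []
    last = None
    for s in samples:
        if last is None or abs(s - last) > delta:
            sent.append(s)
            last = s
    return sent

readings = [22.0, 22.1, 22.2, 22.9, 23.0, 23.0, 24.2]  # deg C, invented trace
kept = filter_samples(readings)
print(f"reduced {len(readings)} samples to {len(kept)}: {kept}")
```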

    Self-Organizing maps for detecting abnormal thermal behavior in data centers

    Get PDF
    The increasing success of Cloud computing applications and online services has contributed to the unsustainability of data center facilities in terms of energy consumption. Higher resource demand has increased the electricity required by computation and cooling resources, leading to power shortages and outages, especially in urban infrastructures. Current energy reduction strategies for Cloud facilities usually disregard the data center topology, the contribution of cooling consumption and the scalability of optimization strategies. Our work tackles the energy challenge by proposing a temperature-aware VM allocation policy based on a Trust-and-Reputation System (TRS). A TRS meets the requirements of inherently distributed environments such as data centers, and allows the implementation of autonomous and scalable VM allocation techniques. For this purpose, we model the relationships between the different computational entities, synthesizing this information into one single metric. This metric, called reputation, is used to optimize the allocation of VMs in order to reduce energy consumption. We validate our approach with a state-of-the-art Cloud simulator using real Cloud traces. Our results show considerable reductions in energy consumption, reaching up to 46.16% savings in computing power and 17.38% savings in cooling, without QoS degradation and while keeping servers below thermal redlining. Moreover, our results show the limitations of the PUE ratio as a metric for energy efficiency. To the best of our knowledge, this paper is the first approach to combine Trust-and-Reputation Systems with Cloud computing VM allocation.
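    As a toy illustration of reputation-driven placement (the actual TRS aggregates trust reports between entities and is not reproduced here), the sketch below folds utilization and thermal headroom into a single score and allocates each VM to the highest-reputation server. All weights and fields are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    utilization: float     # 0..1 share of CPU in use
    inlet_temp: float      # deg C
    redline: float = 27.0  # thermal redline, deg C (assumed)

def reputation(srv, w_power=0.5, w_thermal=0.5):
    # Reward consolidation (high utilization) and penalize thermal stress.
    headroom = max(0.0, (srv.redline - srv.inlet_temp) / srv.redline)
    return w_power * srv.utilization + w_thermal * headroom

def allocate(vm_name, servers):
    best = max(servers, key=reputation)
    print(f"{vm_name} -> {best.name} (reputation {reputation(best):.2f})")

allocate("vm-1", [Server("s1", 0.6, 24.0), Server("s2", 0.3, 26.5)])
```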

    Proactive power and thermal aware optimizations for energy-efficient cloud computing

    Full text link
    Cloud computing addresses the problem of costly computing infrastructures by providing elasticity through dynamic resource provisioning on a pay-as-you-go basis, and it is nowadays considered a valid alternative to owning a high-performance computing (HPC) cluster. Two main incentives make this emerging paradigm appealing: first, the utility-based usage models provided by Clouds allow clients to pay only for the use they make of the infrastructure, increasing user satisfaction; second, accessing Cloud resources requires only a relatively low investment in the remote devices [1].

    Computational demand on data centers is increasing due to the growing popularity of Cloud applications. However, these facilities are becoming unsustainable in terms of power consumption and growing energy costs. Nowadays, the data center industry consumes about 2% of the worldwide energy production [2]. Moreover, the proliferation of urban data centers is responsible for up to 70% of the increasing power demand in metropolitan areas, where the power density is becoming too high for the power grid [3]. Within two or three years, this situation will cause outages in 95% of urban data centers, incurring annual costs of about US$2 million per infrastructure [4]. Besides the economic impact, the heat and carbon footprint generated by cooling systems in data centers are increasing dramatically and are expected to overtake airline industry emissions by 2020 [5]. The Cloud model is helping to mitigate this issue by increasing the overall utilization of data centers, reducing the carbon footprint per executed task and diminishing CO2 emissions [6]. According to Schneider Electric's report on virtualization and Cloud computing efficiency [7], Cloud computing offers around a 17% reduction in energy consumption by sharing computing resources among all users. However, Cloud providers need to implement an energy-efficient management of physical resources to meet the growing demand for their services while ensuring sustainability.

    The main sources of energy consumption in data centers are the computational (IT) and cooling infrastructures. IT represents around 60% of the total consumption, where the static power dissipation of idle servers is the dominant contribution. The cooling infrastructure, in turn, originates around 30% of the overall consumption to ensure the reliability of the computational infrastructure [8]. The key factor affecting cooling requirements is the maximum temperature reached on the servers due to their activity, which depends on both room temperature and workload allocation. Static consumption of servers represents about 70% of the IT power [9]. This issue is intensified by the exponential influence of temperature on leakage currents. Leakage power is a component of the total power consumption in data centers that is not traditionally considered when choosing the set-point temperature of the room. However, the effect of this power contribution, which increases with temperature, can determine the savings associated with the proactive management of the cooling system. One of the major challenges in understanding the thermal influence on static energy at the data center scope is describing the trade-offs between leakage and cooling consumption (a toy numeric sweep of this trade-off is sketched after this abstract).

    The Cloud model helps to reduce static consumption from two perspectives, based on VM allocation and consolidation. First, power-aware policies reduce static consumption by increasing overall utilization, so the set of operating servers can be reduced; Dynamic Voltage and Frequency Scaling (DVFS) is applied for power capping, lowering the servers' energy consumption. Second, thermal-aware strategies help to reduce hot spots in the IT infrastructure by spreading the workload, so the set-point room temperature can be increased, resulting in cooling savings. Both thermal and power approaches have the potential to improve energy efficiency in Cloud facilities. Unfortunately, these policies are not applied jointly due to the lack of models that include both power and thermal parameters. Deriving fast and accurate power models that incorporate these characteristics, targeting high-end servers, would allow us to combine power and temperature in a single energy-efficient management strategy. Furthermore, as Cloud applications expect services to be delivered as per the Service Level Agreement (SLA), power consumption in data centers has to be minimized while meeting this requirement whenever feasible. Also, as opposed to HPC, Cloud workloads vary significantly over time, making optimal allocation and DVFS configuration a non-trivial task. A major challenge in guaranteeing QoS for Cloud applications is analyzing the trade-offs between consolidation and performance, which would help combine DVFS with power and thermal strategies.

    The main objective of this Ph.D. thesis is to address the energy challenge in Cloud data centers from a thermal- and power-aware perspective using proactive strategies. Our research proposes the design and implementation of models and global optimizations that jointly consider the energy consumption of both computing and cooling resources while maintaining QoS, from a new holistic perspective.

    Thesis contributions: To support the thesis that our research can deliver significant value in the area of Cloud energy efficiency, compared to traditional approaches, we have:
    • Defined a taxonomy on energy efficiency that compiles the different levels of abstraction found in the data center domain.
    • Classified state-of-the-art approaches according to the proposed taxonomy, identifying new open challenges from a holistic perspective.
    • Identified the trade-offs between leakage and cooling consumption based on empirical research.
    • Proposed novel modeling techniques for the automatic identification of fast and accurate models, validated in a real environment.
    • Analyzed DVFS, performance, and power trade-offs in the Cloud environment.
    • Designed and implemented a novel proactive optimization policy for the dynamic consolidation of virtual machines that combines DVFS and power-aware strategies while ensuring QoS.
    • Derived thermal models for CPU and memory devices, validated in a real environment.
    • Designed and implemented new proactive approaches that include DVFS, thermal, and power considerations in both cooling and IT consumption from a novel holistic perspective.
    • Validated our optimization strategies in a simulation environment.
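    As a rough numeric illustration of the leakage-versus-cooling trade-off discussed above, the sketch below sweeps the room set point, charging leakage that grows exponentially with temperature against cooling power derived from a quadratic COP curve of the kind commonly used for chilled-water CRAC units. Every model form and constant here is an assumption for the sketch, not a result from the thesis.

```python
import numpy as np

P_DYNAMIC = 10_000.0   # dynamic IT power, W (assumed constant)

def leakage(t_room):
    # Leakage grows exponentially with room temperature (assumed form).
    return 4_000.0 * np.exp(0.03 * (t_room - 20.0))

def cooling(p_it, t_room):
    # Quadratic COP law of the kind fitted for chilled-water CRAC units (assumed).
    cop = 0.0068 * t_room ** 2 + 0.0008 * t_room + 0.458
    return p_it / cop

def total_power(t_room):
    p_it = P_DYNAMIC + leakage(t_room)
    return p_it + cooling(p_it, t_room)

# Sweep candidate set points: too cold wastes cooling, too hot inflates leakage.
best = min(range(18, 33), key=total_power)
print(f"energy-optimal set point: {best} degC, total {total_power(best):,.0f} W")
```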

    Inspection report: A report into the findings of an inspection carried out by the Registration and Inspection Unit at the Homestead, 1998/99

    Get PDF
    SIGLE - Available from British Library Document Supply Centre - DSC:98/27708 / BLDSC - British Library Document Supply Centre, GB, United Kingdom

    Evolutionary power modeling for energy efficiency in CPU-GPU based systems

    Full text link
    Supercomputers have reached massive energy consumption levels due to computational demand, so there is an urgent need to put them on a more scalable curve. In recent years there has been rising interest in reducing the power consumption of these systems. Recent research works focus on adjusting their power states by reducing clock frequency and applying power capping, and on analyzing the thermal impact on static consumption. These techniques rely on power models to predict the power consumption of the infrastructure. However, power consumption in these complex systems involves a vast number of interacting variables of different natures, which may include non-linear dependencies, so extracting the relationships between the most representative parameters and the power consumption requires enormous effort and knowledge about the problem. We propose an automatic method based on Grammatical Evolution to obtain a model that minimizes the power prediction error of a supercomputer node that incorporates both CPU and GPU devices. We monitor the system during runtime using performance counters together with frequency, temperature and power measurements. This evolutionary technique provides both Feature Engineering and Symbolic Regression to infer accurate models that depend only on the most suitable variables, requiring little designer expertise and effort. Our work improves the possibilities of deriving proactive energy-efficient policies in supercomputers that are simultaneously aware of complex considerations of different natures.
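    A full Grammatical Evolution engine is beyond an abstract, but the essential mapping, in which integer codons select grammar productions to build a candidate power model that is then scored by prediction error, fits in a short sketch. The grammar, the synthetic monitoring data and the (1+1) mutation loop below are all simplifying assumptions.

```python
import random
import numpy as np

# Grammar for candidate power models over utilization u, frequency f, temperature t.
GRAMMAR = {
    "<expr>": [["(", "<expr>", "+", "<expr>", ")"],
               ["(", "<expr>", "*", "<expr>", ")"],
               ["<leaf>"]],
    "<leaf>": [["u"], ["f"], ["np.exp(0.05 * t)"], ["1.5"], ["20.0"]],
}

def decode(codons, state, symbol="<expr>", depth=0):
    """Genotype-to-phenotype mapping: one codon chooses each production."""
    if symbol not in GRAMMAR:
        return symbol
    rules = GRAMMAR[symbol]
    if symbol == "<expr>" and depth > 6:   # force a leaf to bound recursion
        rules = [["<leaf>"]]
    choice = rules[codons[state[0] % len(codons)] % len(rules)]
    state[0] += 1
    return "".join(decode(codons, state, s, depth + 1) for s in choice)

# Synthetic monitoring trace standing in for real counters (assumed relation).
rng = np.random.default_rng(0)
u = rng.uniform(0, 1, 200)
f = rng.uniform(1.2, 2.6, 200)
t = rng.uniform(35, 75, 200)
power = 80 + 60 * u * f + 0.8 * np.exp(0.05 * t) + rng.normal(0, 2, 200)

def fitness(codons):
    """RMSE of the decoded expression against the measured power."""
    try:
        pred = eval(decode(codons, [0]), {"np": np, "u": u, "f": f, "t": t})
        return float(np.sqrt(np.mean((pred - power) ** 2)))
    except Exception:
        return float("inf")

# (1+1)-style loop: mutate codons, keep the child if it is no worse.
random.seed(0)
best = [random.randrange(256) for _ in range(24)]
best_fit = fitness(best)
for _ in range(3000):
    child = [random.randrange(256) if random.random() < 0.1 else c for c in best]
    child_fit = fitness(child)
    if child_fit <= best_fit:
        best, best_fit = child, child_fit

print(f"RMSE {best_fit:.2f} with model {decode(best, [0])}")
```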

    Edge federation simulator for data stream analytics

    Full text link
    The technological revolution of the Internet of Things (IoT) is transforming our society by registering and analyzing the behavior of users and infrastructures in order to develop new services that improve quality of life and resource management. IoT-based applications demand a vast amount of both localized and location-based information services. For these scenarios, current cloud-based services appear to be inefficient in terms of latency, throughput and power consumption. Edge computing proposes new infrastructures for effective real-time decision making. These facilities should be able to process vast amounts of data from multiple geographically distributed sources. To that end, new urban edge data centers are to be deployed, bringing computing resources closer to data sources while reducing both core network congestion and overall energy demand. This paper presents an Edge Federation simulator for data stream analytics in a 5G scenario that provides the necessary resource management for efficient service-oriented computing.
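    The abstract does not expose the simulator's interface, so the sketch below only illustrates the kind of accounting such a tool performs: streams attach to the lowest-latency edge site with spare capacity, overflow to the core cloud, and the run reports the resulting latencies. All sites, rates and latency figures are invented.

```python
# Invented edge federation: two sites plus a cloud fallback.
EDGE_SITES = [
    {"name": "edge-a", "capacity_mbps": 100, "latency_ms": 5, "load": 0},
    {"name": "edge-b", "capacity_mbps": 60, "latency_ms": 8, "load": 0},
]
CLOUD_LATENCY_MS = 60

def place(stream_mbps):
    """Attach a stream to the best edge site with room, else fall back to cloud."""
    for site in sorted(EDGE_SITES, key=lambda s: s["latency_ms"]):
        if site["load"] + stream_mbps <= site["capacity_mbps"]:
            site["load"] += stream_mbps
            return site["latency_ms"]
    return CLOUD_LATENCY_MS   # federation full: overflow to the core cloud

streams = [30, 40, 50, 20, 25]           # Mbps per data stream, invented
lat = [place(s) for s in streams]
print("per-stream latency (ms):", lat, "avg:", sum(lat) / len(lat))
```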

    Predictive GPU-based ADAS management in energy-conscious smart cities

    Full text link
    The demand for novel IoT and smart city applications is increasing significantly, and it is expected that by 2020 the number of connected devices will reach 20.41 billion. Many of these applications and services manage real-time data analytics over high volumes of data, thus requiring an efficient computing infrastructure. Edge computing helps to enable this scenario, improving service latency and reducing network saturation. This computing paradigm consists of deploying numerous smaller data centers located near the data sources. Energy efficiency is a key challenge in implementing this scenario, and the management of federated edge data centers would benefit from the use of microgrid energy sources parameterized by users' demands. In this research we propose an ANN-based predictive power model for GPU-based federated edge data centers, driven by the data traffic demanded by the application. We validate our approach using real traffic from a state-of-the-art driving assistance application, obtaining 1-hour-ahead power predictions with a normalized root-mean-square deviation below 7.4% when compared with real measurements. Our research helps to optimize both resource management and the sizing of edge federations.
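    As a hedged sketch of the approach, the snippet below trains an off-the-shelf MLP regressor to map traffic to power on synthetic data and reports the normalized root-mean-square deviation; the traffic/power relation and the network shape are assumptions, not the paper's trained model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic traffic (Mbps) -> power (W) relation, assumed for illustration.
rng = np.random.default_rng(42)
traffic = rng.uniform(50, 500, size=(600, 1))
power = 120 + 0.4 * traffic[:, 0] + rng.normal(0, 8, 600)

# Small feed-forward ANN; shape is an assumption, not the paper's architecture.
model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(traffic[:400], power[:400])      # train on the first 400 samples

pred = model.predict(traffic[400:])        # evaluate on the held-out tail
rmsd = np.sqrt(np.mean((pred - power[400:]) ** 2))
nrmsd = rmsd / (power[400:].max() - power[400:].min())
print(f"NRMSD: {100 * nrmsd:.1f}%")
```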